Towards Calibrated Robust Fine-Tuning of Vision-Language Models
Improving out-of-distribution (OOD) generalization during in-distribution (ID) adaptation is a primary goal of robust fine-tuning of zero-shot models, beyond what naive fine-tuning achieves. However, despite the decent OOD generalization of recent robust fine-tuning methods, confidence calibration for reliable model outputs has not been fully addressed. This work proposes a robust fine-tuning method that simultaneously improves both OOD accuracy and confidence calibration in vision-language models. First, we show that both the OOD classification error and the OOD calibration error share an upper bound consisting of two ID terms: 1) the ID calibration error and 2) the smallest singular value of the ID input covariance matrix. Based on this insight, we design a novel framework that fine-tunes with a constrained multimodal contrastive loss enforcing a larger smallest singular value, further guided by self-distillation from a moving-averaged model to achieve calibrated predictions as well. Starting from empirical evidence supporting our theoretical statements, we provide extensive experimental results on ImageNet distribution-shift benchmarks that demonstrate the effectiveness of our theorem and its practical implementation.
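The bound above involves the smallest singular value of the ID input covariance matrix, which can be estimated directly from a batch of features. The following is a minimal illustrative sketch, not the paper's implementation: the function name and the sign of the penalty are assumptions chosen so that minimizing the penalty encourages a larger smallest singular value, as the abstract's constraint suggests.

```python
import numpy as np

def smallest_singular_value_penalty(features: np.ndarray) -> float:
    """Hypothetical penalty: negative of the smallest singular value
    of the centered feature covariance matrix. Minimizing this value
    pushes the smallest singular value to be larger."""
    # Center the features and form the empirical covariance matrix.
    X = features - features.mean(axis=0, keepdims=True)
    cov = X.T @ X / X.shape[0]
    # Covariance is symmetric PSD, so its singular values equal its
    # eigenvalues; take the smallest one.
    sigma_min = np.linalg.svd(cov, compute_uv=False).min()
    return -float(sigma_min)

# Toy usage with random "ID features" (batch of 128, dimension 16).
rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 16))
penalty = smallest_singular_value_penalty(feats)
```

In a training loop this term would be added (with some weight) to the multimodal contrastive loss; the weighting scheme is not specified by the abstract.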
Supplementary Material for "Tensor Completion Made Practical"

Here we give an outline of the proof of Theorem 3.2. This is our main contribution. Run Jennrich's algorithm (see Section F.2.1) to decompose T. A robust analysis of Jennrich's algorithm implies that we can then estimate the rank one components. See Section F and Section G for details.

C.1 Basic Facts

We use the following notation. The following claim gives us a simple relation for this.

C.4 Concentration Inequalities

Claim C.8. Say we have real numbers γ x. In particular we will prove Theorem B.1.